Support Vector Machines via Advanced Optimization Techniques

Author

  • Eitan Rubinstein
Abstract

The most basic problem considered in Machine Learning is supervised binary data classification, where one is given a training sample – a random sample (xi, yi), i = 1, ..., ℓ, of examples – attribute vectors xi equipped with labels yi = ±1 – drawn from an unknown distribution P, and the goal is to build, based on this sample, a classifier – a function f(x) taking values ±1 – such that the generalization error ProbP{(x, y) : f(x) ≠ y} is as small as possible. This general problem has numerous applications in classification of documents, texts and images, in computerized medical diagnostic systems, etc. The SVM approach is the most frequently used technique for solving the outlined problem (as well as other classification problems arising in Machine Learning). With this approach, the classifier is given by an optimal solution to a specific convex optimization problem. While the theoretical background of SVM is given by a well-developed and deep Statistical Learning Theory, the computational tools used traditionally in the SVM context (that is, numerical techniques for solving the resulting convex programs) are not exactly the state of the art of modern Convex Optimization, especially when the underlying data sets are very large (tens of thousands of examples with high-dimensional attribute vectors in the training sample). In the large-scale case, one cannot use the convex optimization techniques that are most advanced in terms of rate of convergence (Interior Point methods), since the cost of an iteration in such a method (which grows nonlinearly with the sizes of the training data) becomes too large to be practical. At the same time, from the purely optimization viewpoint, the "computationally cheap" optimization methods routinely used in large-scale SVMs appear somewhat obsolete. The goal of the thesis is to investigate the potential of recent advances in "computationally cheap" techniques for extremely large-scale convex optimization in the SVM context.
Specifically, we intend to utilize here the Mirror Prox algorithm (A. Nemirovski, 2004). The research will focus on (a) representing the SVM optimization programs in the specific saddle point form required by the algorithm, (b) adjusting the algorithm to the structure of the SVM problems, and (c) software implementation and testing of the algorithm on large-scale SVM data, with the ultimate goal of developing novel, theoretically solid and computationally efficient optimization techniques for SVM-based supervised binary classification.
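To make the outlined plan concrete, the following is a minimal sketch (not the thesis implementation) of step (a) and the basic Mirror Prox iteration in its simplest Euclidean setup, which reduces to the extragradient method. The soft-margin linear SVM is rewritten as a convex-concave saddle-point problem; the synthetic data, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

# Sketch under stated assumptions: the hinge-loss linear SVM
#   min_w  lam/2 ||w||^2 + (1/n) sum_i max(0, 1 - y_i <x_i, w>)
# is rewritten as the convex-concave saddle-point problem
#   min_w max_{p in [0,1]^n}  lam/2 ||w||^2 + (1/n) sum_i p_i (1 - y_i <x_i, w>),
# and solved by Mirror Prox in the Euclidean setup (extragradient).

rng = np.random.default_rng(0)
n, d = 200, 2
X = rng.normal(size=(n, d))
y = np.sign(X @ np.array([1.0, -1.0]))       # separable labels in {-1, +1}

lam, gamma, T = 0.1, 0.2, 500                # regularization, step size, iterations

def field(w, p):
    """Monotone vector field of the saddle point: (grad_w L, grad_p L)."""
    gw = lam * w - X.T @ (p * y) / n         # descent direction source for w
    gp = (1.0 - y * (X @ w)) / n             # ascent direction source for p
    return gw, gp

w, p = np.zeros(d), np.full(n, 0.5)
w_avg = np.zeros(d)
for _ in range(T):
    gw, gp = field(w, p)                     # 1) field at the current point
    wh = w - gamma * gw                      #    extrapolation step ...
    ph = np.clip(p + gamma * gp, 0.0, 1.0)   #    ... p projected back to [0,1]^n
    gw, gp = field(wh, ph)                   # 2) field at the extrapolated point
    w = w - gamma * gw                       #    update step taken from the
    p = np.clip(p + gamma * gp, 0.0, 1.0)    #    original point
    w_avg += wh / T                          # Mirror Prox outputs averaged iterates

acc = float(np.mean(np.sign(X @ w_avg) == y))
print(f"training accuracy of averaged iterate: {acc:.2f}")
```

Point (b) of the plan – exploiting problem structure – would replace the Euclidean projections above with prox-mappings tailored to the geometry of the w- and p-domains; this sketch only illustrates the generic two-step (extrapolate, then update) pattern of the method.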


Similar articles

A Quadratic Margin-Based Model for Weighting Fuzzy Classification Rules Inspired by Support Vector Machines

Recently, tuning the weights of the rules in Fuzzy Rule-Based Classification Systems has been studied as a way to improve classification accuracy. In this paper, a margin-based optimization model, inspired by Support Vector Machine classifiers, is proposed to compute these fuzzy rule weights. This approach not only considers both accuracy and generalization criteria in a single objective fu...


When to Cache Block Sparse Matrix Multiplication: A Statistical Learning Approach

In previous work it was found that cache blocking of sparse matrix-vector multiplication yielded significant performance improvements (up to 700% on some matrix and platform combinations); however, deciding when to apply the optimization is a non-trivial problem. This paper applies four different statistical learning techniques to explore this classification problem. The statistical techniques use...


A Comparative Study of Extreme Learning Machines and Support Vector Machines in Prediction of Sediment Transport in Open Channels

The limiting velocity in open channels to prevent long-term sedimentation is predicted in this paper using a powerful soft-computing technique known as Extreme Learning Machines (ELM). The ELM is a Single-Layer Feed-forward Neural Network (SLFNN) with a high training speed. The dimensionless parameter of limiting velocity, known as the densimetric Froude number (Fr), is predicte...


A Random Sampling Technique for Training Support Vector Machines (For Primal-Form Maximal-Margin Classifiers)

Random sampling techniques have been developed for combinatorial optimization problems. In this note, we report an application of one of these techniques for training support vector machines (more precisely, primal-form maximal-margin classifiers) that solve two-group classification problems by using hyperplane classifiers. Through this research, we are aiming (I) to design efficient and theore...


Particle swarm optimization for linear support vector machines based classifier selection

Particle swarm optimization is a metaheuristic technique widely applied to solve various optimization problems, as well as parameter selection problems for various classification techniques. This paper presents an approach to optimizing a linear support vector machine classifier that combines its selection from a family of similar classifiers with parameter optimization. Experimental results indic...



Publication date: 2007